
NVMe SSD RAID10 installation and deployment


[root@CTP-TESTDB ~]# fdisk -l

Disk /dev/nvme1n1: 6401.3 GB, 6401252745216 bytes, 12502446768 sectors

Units = sectors of 1 * 512 = 512 bytes

Sector size (logical/physical): 512 bytes / 512 bytes

I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/nvme0n1: 6401.3 GB, 6401252745216 bytes, 12502446768 sectors

Units = sectors of 1 * 512 = 512 bytes

Sector size (logical/physical): 512 bytes / 512 bytes

I/O size (minimum/optimal): 512 bytes / 512 bytes

WARNING: fdisk GPT support is currently new, and therefore in an experimental phase. Use at your own discretion.

Disk /dev/sda: 2398.0 GB, 2397998940160 bytes, 4683591680 sectors

Units = sectors of 1 * 512 = 512 bytes

Sector size (logical/physical): 512 bytes / 512 bytes

I/O size (minimum/optimal): 262144 bytes / 262144 bytes

Disk label type: gpt

Disk identifier: AAD3721D-CEB8-4E26-B327-029933870DFA

#         Start          End    Size  Type             Name
 1         2048      1026047    500M  EFI System       EFI System Partition
 2      1026048      2050047    500M  Microsoft basic
 3      2050048   4683589631    2.2T  Linux LVM

Disk /dev/mapper/rhel-root: 536.9 GB, 536870912000 bytes, 1048576000 sectors

Units = sectors of 1 * 512 = 512 bytes

Sector size (logical/physical): 512 bytes / 512 bytes

I/O size (minimum/optimal): 262144 bytes / 262144 bytes

Disk /dev/mapper/rhel-swap: 34.4 GB, 34359738368 bytes, 67108864 sectors

Units = sectors of 1 * 512 = 512 bytes

Sector size (logical/physical): 512 bytes / 512 bytes

I/O size (minimum/optimal): 262144 bytes / 262144 bytes

Disk /dev/mapper/rhel-u01: 107.4 GB, 107374182400 bytes, 209715200 sectors

Units = sectors of 1 * 512 = 512 bytes

Sector size (logical/physical): 512 bytes / 512 bytes

I/O size (minimum/optimal): 262144 bytes / 262144 bytes

Disk /dev/mapper/rhel-u02: 1718.3 GB, 1718339239936 bytes, 3356131328 sectors

Units = sectors of 1 * 512 = 512 bytes

Sector size (logical/physical): 512 bytes / 512 bytes

I/O size (minimum/optimal): 262144 bytes / 262144 bytes

Software RAID will be built across the two NVMe drives:

/dev/nvme1n1

/dev/nvme0n1
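If the nvme-cli package is available (an assumption; the original session only uses fdisk -l), a quick inventory check before repartitioning might look like this:

# List NVMe namespaces with model, serial number and capacity (requires nvme-cli)
nvme list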

[root@CTP-TESTDB ~]# fdisk /dev/nvme1n1

Check the partition UUIDs.
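The original log does not show which command was used for this check; blkid or lsblk is the usual way, for example:

# Print UUID, LABEL and filesystem type for block devices that carry a signature
blkid
# lsblk can also show partition and filesystem UUIDs, even before mkfs
lsblk -o NAME,SIZE,UUID,PARTUUID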

1.1 Partition /dev/nvme0n1 with parted

[root@CTP-TESTDB ~]# parted /dev/nvme0n1

GNU Parted 3.1

Using /dev/nvme0n1

Welcome to GNU Parted! Type 'help' to view a list of commands.

(parted) mklabel ★

New disk label type? gpt ★

Warning: The existing disk label on /dev/nvme0n1 will be destroyed and all data on this disk will be lost. Do you want to continue?

Yes/No? yes ★

(parted) print ★

Model: NVMe Device (nvme)

Disk /dev/nvme0n1: 6401GB

Sector size (logical/physical): 512B/512B

Partition Table: gpt ★

Disk Flags:

Number Start End Size File system Name Flags

(parted) mkpart ★

Partition name? []? data ★

File system type? [ext2]? xfs ★

Start? 0 ★

End? 6T ★

Warning: The resulting partition is not properly aligned for best performance.

Ignore/Cancel? ingore

parted: invalid token: ingore

Ignore/Cancel? Ignore ★

(parted) q

Information: You may need to update /etc/fstab.

[root@CTP-TESTDB ~]#

[root@CTP-TESTDB ~]#
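Parted warned that the partition is not properly aligned for best performance. A non-interactive variant that keeps the partition 1 MiB-aligned is sketched below; the --script/--align options and the 0% 100% bounds are standard parted usage, not taken from the original session:

# Label the disk GPT and create one optimally aligned partition named "data"
parted --script --align optimal /dev/nvme0n1 mklabel gpt mkpart data xfs 0% 100%
# Confirm the alignment of partition 1
parted /dev/nvme0n1 align-check optimal 1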

[root@CTP-TESTDB ~]# fdisk -l

WARNING: fdisk GPT support is currently new, and therefore in an experimental phase. Use at your own discretion.

Disk /dev/nvme1n1: 6401.3 GB, 6401252745216 bytes, 12502446768 sectors

Units = sectors of 1 * 512 = 512 bytes

Sector size (logical/physical): 512 bytes / 512 bytes

I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk label type: gpt

Disk identifier: 9D5BBC83-B005-4598-A025-2467A9D1D5C2

#         Start          End    Size  Type             Name
 1           34  12502446734    5.8T  Microsoft basic  data

WARNING: fdisk GPT support is currently new, and therefore in an experimental phase. Use at your own discretion.

Disk /dev/nvme0n1: 6401.3 GB, 6401252745216 bytes, 12502446768 sectors

Units = sectors of 1 * 512 = 512 bytes

Sector size (logical/physical): 512 bytes / 512 bytes

I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk label type: gpt

Disk identifier: 31213F61-02AD-4C2F-8A9D-7ADD34B05A11

#         Start          End    Size  Type             Name
 1           34  12502446734    5.8T  Microsoft basic  data

WARNING: fdisk GPT support is currently new, and therefore in an experimental phase. Use at your own discretion.

Disk /dev/sda: 2398.0 GB, 2397998940160 bytes, 4683591680 sectors

Units = sectors of 1 * 512 = 512 bytes

Sector size (logical/physical): 512 bytes / 512 bytes

I/O size (minimum/optimal): 262144 bytes / 262144 bytes

Disk label type: gpt

Disk identifier: AAD3721D-CEB8-4E26-B327-029933870DFA

#         Start          End    Size  Type             Name
 1         2048      1026047    500M  EFI System       EFI System Partition
 2      1026048      2050047    500M  Microsoft basic
 3      2050048   4683589631    2.2T  Linux LVM

Disk /dev/mapper/rhel-root: 536.9 GB, 536870912000 bytes, 1048576000 sectors

Units = sectors of 1 * 512 = 512 bytes

Sector size (logical/physical): 512 bytes / 512 bytes

I/O size (minimum/optimal): 262144 bytes / 262144 bytes

Disk /dev/mapper/rhel-swap: 34.4 GB, 34359738368 bytes, 67108864 sectors

Units = sectors of 1 * 512 = 512 bytes

Sector size (logical/physical): 512 bytes / 512 bytes

I/O size (minimum/optimal): 262144 bytes / 262144 bytes

Disk /dev/mapper/rhel-u01: 107.4 GB, 107374182400 bytes, 209715200 sectors

Units = sectors of 1 * 512 = 512 bytes

Sector size (logical/physical): 512 bytes / 512 bytes

I/O size (minimum/optimal): 262144 bytes / 262144 bytes

Disk /dev/mapper/rhel-u02: 1718.3 GB, 1718339239936 bytes, 3356131328 sectors

Units = sectors of 1 * 512 = 512 bytes

Sector size (logical/physical): 512 bytes / 512 bytes

I/O size (minimum/optimal): 262144 bytes / 262144 bytes

2 Re-read the partition tables and verify the new partitions

[root@CTP-TESTDB ~]# partprobe /dev/nvme0n1

[root@CTP-TESTDB ~]# partprobe /dev/nvme1n1

[root@CTP-TESTDB ~]# cat /proc/partitions

major minor #blocks name

6251223384 nvme1n1

6251223350 nvme1n1p1

6251223384 nvme0n1

6251223350 nvme0n1p1

2341795840 sda

512000 sda1

512000 sda2

2340769792 sda3

524288000 dm-0

33554432 dm-1

104857600 dm-2

1678065664 dm-3
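lsblk gives a more readable confirmation that the kernel now sees one partition on each drive (an optional extra check, not part of the original log):

# Tree view of the two NVMe drives and their new partitions
lsblk -o NAME,SIZE,TYPE /dev/nvme0n1 /dev/nvme1n1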

2.1 Create the RAID 1 array with mdadm

[root@CTP-TESTDB ~]# mdadm -C /dev/md1 -a yes -l 1 -n 2 /dev/nvme0n1p1 /dev/nvme1n1p1
mdadm: Note: this array has metadata at the start and
    may not be suitable as a boot device.  If you plan to
    store '/boot' on this device please ensure that
    your boot-loader understands md/v1.x metadata, or use
    --metadata=0.90

Continue creating array? y ★
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md1 started.

2.2 Check the resync progress

[root@CTP-TESTDB ~]# cat /proc/mdstat
Personalities : [raid1]
md1 : active raid1 nvme1n1p1[1] nvme0n1p1[0]
      6251091200 blocks super 1.2 [2/2] [UU]
      [>....................]  resync =  0.5% (33323840/6251091200) finish=163.5min speed=633620K/sec
      bitmap: 47/47 pages [188KB], 65536KB chunk

unused devices: <none>
[root@CTP-TESTDB ~]#

2.3 Create the XFS filesystem on /dev/md1

[root@CTP-TESTDB ~]# mkfs.xfs /dev/md1
meta-data=/dev/md1               isize=512    agcount=6, agsize=268435455 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0, sparse=0
data     =                       bsize=4096   blocks=1562772800, imaxpct=5
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=521728, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
[root@CTP-TESTDB ~]#
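The mdstat output above shows the ~6 TB mirror still resynchronising (estimated about 163 minutes). The array is usable while it resyncs, but the progress can be watched and the rebuild bandwidth tuned if it competes with production I/O. The limit values below are illustrative, not settings from the original deployment:

# Refresh the resync progress every 5 seconds
watch -n 5 cat /proc/mdstat
# Optional: adjust the per-device resync bandwidth limits (KB/s)
sysctl -w dev.raid.speed_limit_min=200000
sysctl -w dev.raid.speed_limit_max=2000000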

Check the software RAID UUID, the resync status, and the member device state:

[root@CTP-TESTDB ~]# mdadm -D /dev/md1
/dev/md1:
           Version : 1.2
     Creation Time : Fri Jan  1 13:42:34 2021
        Raid Level : raid1
        Array Size : 6251091200 (5961.51 GiB 6401.12 GB)
     Used Dev Size : 6251091200 (5961.51 GiB 6401.12 GB)
      Raid Devices : 2
     Total Devices : 2
       Persistence : Superblock is persistent

     Intent Bitmap : Internal

       Update Time : Fri Jan  1 14:16:51 2021
             State : clean, resyncing
    Active Devices : 2
   Working Devices : 2
    Failed Devices : 0
     Spare Devices : 0

Consistency Policy : bitmap

     Resync Status : 19% complete

              Name : CTP-KY-NEW-HOTDB:1  (local to host CTP-KY-NEW-HOTDB)
              UUID : 4c5e7a61:8461f0f0:200f298a:387664fd
            Events : 419

    Number   Major   Minor   RaidDevice State
       2                          0      active sync   /dev/nvme0n1p1
       3                          1      active sync   /dev/nvme1n1p1
[root@CTP-TESTDB ~]#
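The next listing shows /etc/mdadm.conf, but the original page does not show how it was produced. A common way to persist the array definition so it assembles consistently at boot (the generated ARRAY line will differ slightly from the hand-written one below) is:

# Append the scanned array definition to the mdadm configuration file
mdadm --detail --scan >> /etc/mdadm.conf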

[root@CTP-TESTDB ~]# cat /etc/mdadm.conf
ARRAY /dev/md1 level=raid1 num-devices=2 metadata=1.2 name=localhost.localdomain:0 UUID=4c5e7a61:8461f0f0:200f298a:387664fd
   devices=/dev/nvme0n1p1,/dev/nvme1n1p1
[root@CTP-TESTDB ~]#

[root@CTP-TESTDB ~]# cat /etc/fstab

#
# /etc/fstab
# Created by anaconda on Mon Dec 28 10:13:06 2020
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/rhel-root                       /           xfs    defaults                    0 0
UUID=d0366959-3810-48a4-a82a-6f40d3c0939a   /boot       xfs    defaults                    0 0
UUID=F2C7-CC7C                              /boot/efi   vfat   umask=0077,shortname=winnt  0 0
/dev/mapper/rhel-u01                        /u01        xfs    defaults                    0 0
/dev/mapper/rhel-u02                        /u02        xfs    defaults                    0 0
/dev/mapper/rhel-swap                       swap        swap   defaults                    0 0
/dev/md1                                    /data       xfs    defaults                    0 0
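The log jumps from the fstab entry straight to the mounted result, so the intermediate steps below are an assumed reconstruction rather than part of the original session:

# Create the mount point named in fstab, mount everything, and verify
mkdir -p /data
mount -a
df -h /data

Referencing the filesystem by UUID (taken from blkid /dev/md1) instead of /dev/md1 in fstab is also common and survives device renumbering.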

Installation complete.

[root@CTP-KY-NEW-HOTDB ~]# df -h
Filesystem             Size  Used Avail Use% Mounted on
/dev/mapper/rhel-root  500G   20G  481G   4% /
devtmpfs                63G     0   63G   0% /dev
tmpfs                   63G     0   63G   0% /dev/shm
tmpfs                   63G   68M   63G   1% /run
tmpfs                   63G     0   63G   0% /sys/fs/cgroup
/dev/md1               5.9T  1.7T  4.2T  29% /data
/dev/sda2              494M  144M  351M  30% /boot
/dev/sda1              500M  9.8M  490M   2% /boot/efi
/dev/mapper/rhel-u02   1.6T  5.0G  1.6T   1% /u02
/dev/mapper/rhel-u01   100G  4.5G   96G   5% /u01
tmpfs                   13G   36K   13G   1% /run/user/0
/dev/loop0             4.2G  4.2G     0 100% /mnt
tmpfs                   13G     0   13G   0% /run/user/1001
[root@CTP-KY-NEW-HOTDB ~]#
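On RHEL 7 the array is normally re-assembled at boot from the member superblocks; after editing /etc/mdadm.conf it can still be worth rebuilding the initramfs so the early-boot environment carries the same configuration. This step is not in the original log and is an assumption:

# Rebuild the initramfs for the running kernel, picking up /etc/mdadm.conf
dracut -f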


